Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans
The magnetic resonance (MR) analysis of brain tumors is widely used for
diagnosis and examination of tumor subregions. The overlap among the
intensity distributions of the healthy, enhancing, non-enhancing, and edema
regions makes automatic segmentation a challenging task. Here, we show that
a convolutional neural network trained on high-contrast images can transform
the intensity distribution of brain lesions within their internal
subregions. Specifically, a generative adversarial network (GAN) is extended
to synthesize high-contrast images. A comparison of these synthetic images
with real MR images of brain tumor tissue showed a significant segmentation
improvement and reduced the number of real channels required for
segmentation. The synthetic images serve as substitutes for real channels
and can bypass real modalities in the multimodal brain tumor segmentation
framework. Segmentation results on the BraTS 2019 dataset demonstrate that
the proposed approach can efficiently segment the tumor areas. Finally, we
predict patient survival time from volumetric features of the tumor
subregions, together with the age of each case, using several regression
models.
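The abstract does not spell out the GAN objective; as an illustration only, a pix2pix-style formulation (least-squares adversarial loss plus an L1 term tying the synthetic channel to a real high-contrast target) could be sketched as below. The function name, the loss choice, and the L1 weight are assumptions, not the paper's stated method.

```python
import numpy as np

def gan_losses(d_real, d_fake, synth, target, l1_weight=100.0):
    """Least-squares GAN losses plus an L1 reconstruction term (assumed,
    pix2pix-style) tying the synthetic image to the real high-contrast one.
    d_real / d_fake are discriminator outputs on real / synthetic images."""
    d_loss = np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)
    g_adv = np.mean((d_fake - 1.0) ** 2)          # fool the discriminator
    g_l1 = l1_weight * np.mean(np.abs(synth - target))
    return d_loss, g_adv + g_l1
```

In such a setup the generator minimises the second returned value while the discriminator minimises the first.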
Representation learning for cross-modality classification
Differences in scanning parameters or modalities can complicate image analysis based on supervised classification. This paper presents two representation learning approaches, based on autoencoders, that address this problem by learning representations that are similar across domains. Both approaches use, next to the data representation objective, a similarity objective to minimise the difference between representations of corresponding patches from each domain. We evaluated the methods in transfer learning experiments on multi-modal brain MRI data and on synthetic data. After transforming training and test data from different modalities to the common representations learned by our methods, we trained classifiers for each pair of modalities. We found that adding the similarity term to the standard objective can produce representations that are more similar and can give a higher accuracy in these cross-modality classification experiments.
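The combined objective described above, per-domain reconstruction plus a similarity term on the latent codes of corresponding patches, can be sketched as follows; the mean-squared form of each term and the weighting are assumptions for illustration.

```python
import numpy as np

def cross_modality_loss(x_a, rec_a, x_b, rec_b, z_a, z_b, sim_weight=1.0):
    """x_*: input patches, rec_*: autoencoder reconstructions,
    z_*: latent codes of corresponding patches from domains A and B."""
    recon = np.mean((x_a - rec_a) ** 2) + np.mean((x_b - rec_b) ** 2)
    similarity = np.mean((z_a - z_b) ** 2)  # pull representations together
    return recon + sim_weight * similarity
```

With `sim_weight = 0` this reduces to two independent autoencoders; the similarity term is what makes the learned representations transferable across modalities.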
TuNet: End-to-end Hierarchical Brain Tumor Segmentation using Cascaded Networks
Glioma is one of the most common types of brain tumors; it arises in the
glial cells in the human brain and in the spinal cord. In addition to having a
high mortality rate, glioma treatment is also very expensive. Hence, automatic
and accurate segmentation and measurement from the early stages are critical in
order to prolong the survival rates of the patients and to reduce the costs of
the treatment. In the present work, we propose a novel end-to-end cascaded
network for semantic segmentation that utilizes the hierarchical structure of
the tumor sub-regions with ResNet-like blocks and Squeeze-and-Excitation
modules after each convolution and concatenation block. By utilizing
cross-validation, an average ensemble technique, and a simple post-processing
technique, we obtained Dice scores of 88.06, 80.84, and 80.29, and Hausdorff
distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor,
tumor core, and enhancing tumor, respectively, on the online test set.
Comment: Accepted at MICCAI BrainLes 201
Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation
We propose a new deep learning method for tumour segmentation when dealing
with missing imaging modalities. Instead of producing one network for each
possible subset of observed modalities or using arithmetic operations to
combine feature maps, our hetero-modal variational 3D encoder-decoder
independently embeds all observed modalities into a shared latent
representation. Missing data and tumour segmentation can be then generated from
this embedding. In our scenario, the input is a random subset of modalities. We
demonstrate that the optimisation problem can be seen as a mixture sampling. In
addition to this, we introduce a new network architecture building upon both
the 3D U-Net and the Multi-Modal Variational Auto-Encoder (MVAE). Finally, we
evaluate our method on BraTS2018 using subsets of the imaging modalities as
input. Our model outperforms the current state-of-the-art method for dealing
with missing modalities and achieves similar performance to the subset-specific
equivalent networks.
Comment: Accepted at MICCAI 201
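One concrete way to embed an arbitrary subset of observed modalities into a shared latent representation is product-of-Gaussians fusion, as in the MVAE the paper builds on; treating this as the paper's exact fusion rule is an assumption.

```python
import numpy as np

def fuse_posteriors(mus, logvars):
    """Fuse per-modality Gaussian posteriors N(mu_i, var_i) over the observed
    subset into a single Gaussian (product of Gaussians, numpy sketch).
    mus, logvars: lists of equally shaped arrays, one per observed modality."""
    precisions = np.exp(-np.asarray(logvars))            # 1 / var_i
    var = 1.0 / precisions.sum(axis=0)                   # fused variance
    mu = var * (np.asarray(mus) * precisions).sum(axis=0)  # precision-weighted mean
    return mu, np.log(var)
```

Missing modalities simply drop out of the lists, which is what lets a single network handle any input subset at test time.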
Quantification of Metabolites in MR Spectroscopic Imaging using Machine Learning
Magnetic Resonance Spectroscopic Imaging (MRSI) is a clinical imaging
modality for measuring tissue metabolite levels in-vivo. An accurate estimation
of spectral parameters allows for better assessment of spectral quality and
metabolite concentration levels. The current gold standard quantification
method is LCModel, a commercial fitting tool. However, it fails for spectra
with a poor signal-to-noise ratio (SNR) or a large number of artifacts.
This paper introduces a framework based on random forest regression for
accurate estimation of the output parameters of a model based analysis of MR
spectroscopy data. The goal of our proposed framework is to learn the spectral
features from a training set comprising different variations of both
simulated and in-vivo brain spectra and then use this learning for the
subsequent metabolite quantification. Experiments involve training and testing
on simulated and in-vivo human brain spectra. We estimate parameters such as
metabolite concentrations and compare our results with those from LCModel.
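A toy version of this pipeline, with synthetic Lorentzian-like "spectra" standing in for the simulated/in-vivo training set, might look like the following sketch; the peak shape, noise level, and forest size are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
concentrations = rng.uniform(1.0, 10.0, size=200)   # target parameter per spectrum
freqs = np.linspace(0.0, 1.0, 64)

# Each "spectrum": a Lorentzian-like peak scaled by the concentration, plus noise.
spectra = concentrations[:, None] / (1.0 + ((freqs - 0.5) / 0.05) ** 2)
spectra += rng.normal(0.0, 0.1, spectra.shape)

# Random forest regression from spectral features to concentration.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(spectra[:150], concentrations[:150])
pred = model.predict(spectra[150:])
```

The appeal over a fitting tool is that the forest learns the mapping from data, so it degrades more gracefully on noisy or artifact-ridden spectra.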
Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans
Structural magnetic resonance imaging (MRI) has been widely utilized for
analysis and diagnosis of brain diseases. Automatic segmentation of brain
tumors is a challenging task for computer-aided diagnosis due to the low
tissue contrast in the tumor subregions. To overcome this, we devise a novel
pixel-wise segmentation framework with a convolutional 3D-to-2D MR patch
conversion model that predicts the class label of the central pixel in the
input sliding patches. Specifically, we first extract 3D patches from each
modality and calibrate their slices through a squeeze-and-excitation (SE)
block. Then, the
output of the SE block is fed directly into subsequent bottleneck layers to
reduce the number of channels. Finally, the calibrated 2D slices are
concatenated to obtain multimodal features through a 2D convolutional neural
network (CNN) for prediction of the central pixel. In our architecture, both
local inter-slice and global intra-slice features are jointly exploited to
predict the class label of the central voxel in a given patch through the 2D
CNN classifier. We implicitly incorporate all modalities through trainable
parameters that weight the contribution of each sequence to the segmentation.
Experimental results on the segmentation of brain tumors in multimodal MRI
scans (BraTS'19) demonstrate that our proposed method can efficiently segment
the tumor regions.
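The 3D-to-2D conversion idea can be illustrated with a simplified slice-weighting step: a softmax over per-slice scores stands in here for the paper's SE-based calibration, so the weighting mechanism itself is an assumption.

```python
import numpy as np

def patch_3d_to_2d(patch, slice_scores):
    """Collapse a 3D patch to a calibrated 2D slice (numpy sketch).
    patch: (D, H, W) single-modality patch; slice_scores: (D,) learned
    stand-in scores for the contribution of each slice."""
    w = np.exp(slice_scores - slice_scores.max())
    w /= w.sum()                                   # softmax slice weights
    return (patch * w[:, None, None]).sum(axis=0)  # weighted depth collapse -> (H, W)
```

The resulting 2D maps from all modalities would then be concatenated and fed to the 2D CNN that classifies the central pixel.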
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and obtain uncertainty estimation of the segmentation results.
Comment: 12 pages, 3 figures, MICCAI BrainLes 201
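The test-time augmentation procedure described above amounts to predicting on transformed copies of the input, inverting each transform on the prediction, and averaging. A minimal flip-only sketch (`model` is any callable mapping a volume to a per-voxel score map; restricting to flips is a simplification of the paper's rotation/scaling/noise augmentations):

```python
import numpy as np

def tta_predict(model, volume, axes=(0, 1, 2)):
    """Average model predictions over axis flips (test-time augmentation)."""
    preds = [model(volume)]                              # unaugmented prediction
    for ax in axes:
        flipped = np.flip(volume, axis=ax)
        preds.append(np.flip(model(flipped), axis=ax))   # undo the flip on the output
    return np.mean(preds, axis=0)
```

The spread of the individual predictions (e.g. their per-voxel variance) is what yields the uncertainty estimate mentioned in the abstract.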
Brain Tumor Segmentation from Multi-Spectral MR Image Data Using Random Forest Classifier
The development of brain tumor segmentation techniques based on multi-spectral MR image data has a relevant impact on clinical practice via better diagnosis, radiotherapy planning, and follow-up studies. This task is also very challenging due to the great variety of tumor appearances, the presence of several noise effects, and the differences in scanner sensitivity. This paper proposes an automatic procedure trained to distinguish gliomas from normal brain tissue in multi-spectral MRI data. The procedure is based on a random forest (RF) classifier, which uses 80 computed features besides the four observed ones, including morphological features, gradients, and Gabor wavelet features. The intermediary segmentation outcome provided by the RF is fed to a twofold post-processing step, which regularizes the shape of the detected tumors and enhances the segmentation accuracy. The performance of the procedure was evaluated on the 274 records of the BraTS 2015 training data set. The achieved overall Dice scores of 85-86% represent highly accurate segmentation.
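A small stand-in for this feature-based pipeline, using only intensity and gradient-magnitude features on a synthetic 2D image in place of the full 80-feature multi-spectral setup, could look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, (32, 32))
image[8:24, 8:24] += 3.0                      # synthetic bright "tumor" region
labels = np.zeros((32, 32), dtype=int)
labels[8:24, 8:24] = 1

# Per-pixel features: observed intensity plus computed gradient magnitude.
gy, gx = np.gradient(image)
features = np.stack([image, np.hypot(gx, gy)], axis=-1).reshape(-1, 2)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features, labels.ravel())
pred = clf.predict(features).reshape(32, 32)
```

The paper's post-processing step would then regularise `pred` (e.g. by removing small components) before scoring.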
3D U-Net Based Brain Tumor Segmentation and Survival Days Prediction
The past few years have witnessed the prevalence of deep learning in many
application scenarios, among them medical image processing. Diagnosis and
treatment of brain tumors require an accurate and reliable segmentation of
brain tumors as a prerequisite. However, such work conventionally demands a
significant amount of a brain surgeon's time. Computer vision techniques can
relieve surgeons of this tedious marking procedure. In this paper, a 3D
U-Net based deep learning model is trained with brain-wise normalization and
patching strategies for the brain tumor segmentation task in the BraTS 2019
competition. Dice coefficients for the enhancing tumor, tumor core, and
whole tumor are 0.737, 0.807, and 0.894, respectively, on the validation
dataset; on the test dataset, these values are 0.778, 0.798, and 0.852.
Furthermore, numerical features, including the ratio of tumor size to brain
size and the area of the tumor surface, as well as the age of each subject,
are extracted from the predicted tumor labels and used for the overall
survival days prediction task. The accuracy is 0.448 on the validation
dataset and 0.551 on the final test dataset.
Comment: Third place award of the 2019 MICCAI BraTS challenge survival task
[BraTS 2019](https://www.med.upenn.edu/cbica/brats2019.html)
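The volumetric survival features mentioned above (tumor-to-brain volume ratio and tumor surface area) can be approximated from a predicted label volume as in the sketch below; the voxel-counting surface estimate with a 6-neighbourhood is an assumption, not the paper's stated definition.

```python
import numpy as np

def survival_features(tumor_mask, brain_mask):
    """Compute (tumor/brain volume ratio, surface voxel count) from binary
    3D masks. Surface voxels are tumor voxels with at least one non-tumor
    6-neighbour (assumed definition)."""
    ratio = tumor_mask.sum() / max(brain_mask.sum(), 1)
    padded = np.pad(tumor_mask, 1)                      # avoid border wrap-around
    neighbours = sum(np.roll(padded, s, axis=a)         # tumor count among the
                     for a in range(3) for s in (-1, 1))  # six face neighbours
    surface = np.logical_and(padded == 1, neighbours < 6)
    return ratio, int(surface.sum())
```

These scalars, together with the subject's age, would then be the inputs to the survival-days regressor.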